SYSTEM FOR LOCATING THE SAME VEHICLE IN SEVERAL ZONES DIFFERENT FROM ONE ANOTHER, THROUGH WHICH IT PASSES CONSECUTIVELY
Patent abstract:
The present invention relates to a system for locating the same vehicle (50) in at least two zones (110, 120, 121, 122) through which it passes consecutively, characterized in that it comprises: - a pair of cameras (11 and 12), - at least one monocular camera (20, 21, 22), - a modeling unit (31), - a position estimation unit (32) provided to determine the coordinates of at least one point of said vehicle (50) at a second location of the zone of interest (120, 12n) as a function of: - the image signals of a monocular camera at the second location (20n), - the model at the first location M1 of said vehicle (50) established by said modeling unit (31), and - information I1, and thereby to locate said vehicle (50) at said second location. The present invention also relates to a localization method. The present invention also relates to a method and system for measuring the speed of a vehicle between two zones distant from each other, a method and system for detecting the crossing of at least one line, a method and system for instantaneous speed measurement, as well as a computer program.
Publication number: FR3020490A1
Application number: FR1453685
Filing date: 2014-04-24
Publication date: 2015-10-30
Inventors: Alain Rouh; Elise Le Gouil; Jean Beaudet
Applicant: Morpho SA
IPC main class:
Patent description:
[0001] The present invention relates to a system for locating the same vehicle in several zones, different from one another, through which said vehicle passes consecutively. The present invention finds particular application in the evaluation of the average speed of vehicles between two different places, in the evaluation of the instantaneous speed at locations within each of said first and second zones, and in line crossing detection, for example stop lines, level crossings, red lights, etc. In road traffic control, there is a need to locate the same vehicle in a first zone and then in a second zone different from said first zone. Such a need arises, for example, in measuring the average speed of a vehicle over a section of road with an accuracy allowing metrological approval of the system. It also arises in detecting the crossing of lines, such as red-light stop lines, possibly with the recording of a photograph showing the vehicle straddling the line or lines in question, or before or after them. In general, the localization of a vehicle in a given zone can be carried out by means of a localization system consisting of a camera, which ensures the recognition of the vehicle, in particular of its registration plate, and of an additional sensor, such as an inductive ground loop, which ensures the accuracy of the vehicle's positioning. It can also be performed using a 3D localization system consisting of a pair of cameras, which ensures both the recognition of the vehicle and the determination of its position with sufficient accuracy. Locating the same vehicle in a first zone and then in a second zone different from the first can then be achieved by duplicating either of these simple localization systems, which implies a doubling of the installation costs. In road traffic control, such as in the assessment of the average speed of vehicles or the detection of line crossings, there is likewise a need to locate the same vehicle in several different zones.
Localization systems covering several zones pose the same problems as those mentioned above. The object of the invention is to propose a system for locating the same vehicle in several zones, different from one another, which is simpler to implement and less expensive than the juxtaposition of simple localization systems such as those just mentioned. To this end, a system for locating the same vehicle in at least two zones through which it passes consecutively is, according to the invention, characterized in that it comprises: - a pair of calibrated cameras whose respective fields of view include the first of said zones, - at least one calibrated monocular camera whose field of view includes a zone other than said first zone, - a modeling unit which is provided to receive the image signals of each of the cameras of the pair of cameras, to establish a model at a first location of the vehicle or of a part of said vehicle, and to determine the coordinates of at least one point of said vehicle at said first location of said first zone, and thus to locate it, and - a position estimation unit which is associated with a monocular camera of a zone other than the first zone and which is provided to determine the coordinates of at least one point of said vehicle at a second location of the zone under consideration as a function of: - the image signals of said monocular camera, - the model at the first location of said vehicle established by said modeling unit, - information enabling said position estimation unit to establish a correspondence between at least one point of the image taken by one of the cameras of the pair and an image taken by the monocular camera, and thus to locate said vehicle. According to another advantageous characteristic of the invention, the modeling unit determines the coordinates of a point, or of a discrete set of points, belonging to the vehicle under consideration or to a part thereof, when it is at the first
place, said model at the first place comprising said coordinates. According to another advantageous characteristic of the invention, said point or points of said discrete set of points belong to the same plane of a part of the vehicle under consideration, said model at the first place also including information defining said plane. According to another advantageous characteristic of the invention, said position estimation unit estimates the pose variation of the vehicle under consideration between said first location of the first zone and the second location of the second zone, and deduces therefrom, for at least one point of the model at the first place, the coordinates of the corresponding point at the second place. According to another advantageous characteristic of the invention, to estimate the pose variation of said vehicle, said or each position estimation unit implements a method of non-linear optimization of the reprojection error. According to another advantageous characteristic of the invention, the modeling unit determines the coordinates of at least one point belonging to the vehicle under consideration or to a part thereof, when it is at the first place, said model at the first place comprising, on the one hand, said coordinates and, on the other hand, information defining the plane of the road on which the vehicle under consideration is traveling, or the height of said point or points above said road. According to another advantageous characteristic of the invention, the system further comprises at least one position estimation unit which is associated with a monocular camera of a zone other than the first zone and which is provided to act as a modeling unit, and thus to establish a model, at a location of the zone under consideration, of the vehicle or of a part of said vehicle, and to transmit said model to a following position estimation unit.
The present invention also relates to a system for estimating the average speed of a vehicle between two zones distant from each other, said system comprising: - a localization system for locating the same vehicle at a first location of a first of said zones and at a second location of a second of said zones; - means for determining the duration of the vehicle's journey between the first place and the second place; - means for estimating the curvilinear distance between the first place and the second place; - means for determining the average speed of said vehicle from said duration and said curvilinear distance. [0002] According to the invention, said average speed estimation system is characterized in that said localization system is a localization system as just described. The present invention further relates to a system for detecting the crossing by a vehicle of at least one line in a zone. According to the invention, said detection system is characterized in that its detection means consist of a localization system as just described, of which said modeling unit or said or each position estimation unit is further provided with means for detecting the crossing of the line of the zone under consideration. The present invention also relates to a system for measuring the instantaneous speed of a vehicle at a place in a zone, characterized in that it comprises a localization system as just described. The present invention also relates to a method of locating the same vehicle in at least two zones through which it passes consecutively, said method being implemented by a localization system as just described.
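The average-speed estimation described above reduces to dividing the curvilinear distance between the two places by the journey duration. The following minimal sketch illustrates this; the function name, the units and the sample values are illustrative assumptions, not taken from the patent:

```python
def average_speed_kmh(t1_s: float, t2_s: float, curvilinear_distance_m: float) -> float:
    """Average speed between the first and second localization, in km/h.

    t1_s, t2_s: timestamps (seconds) of the two localizations;
    curvilinear_distance_m: distance along the road between the two places.
    """
    duration = t2_s - t1_s
    if duration <= 0:
        raise ValueError("second localization must occur after the first")
    # m/s to km/h conversion factor is 3.6
    return curvilinear_distance_m / duration * 3.6

# Hypothetical example: 500 m covered in 18 s gives 100 km/h
print(average_speed_kmh(0.0, 18.0, 500.0))
```

The timestamps would come from the image acquisition times of the stereo pair and of the monocular camera respectively.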
The present invention also relates to a method for estimating the average speed of a vehicle between two zones different from and distant from each other, to a method of detecting the crossing of at least one line of a zone by a vehicle, and to a program recorded on a medium and intended to be loaded into a programmable device of a localization system, an average speed estimation system or a line crossing detection system, such as those just described. The characteristics of the invention mentioned above, as well as others, will emerge more clearly on reading the following description of an exemplary embodiment, said description being given in relation to the attached drawings, among which: Fig. 1 is a schematic view of a localization system according to a first embodiment of the present invention; Fig. 2 is a schematic view of a localization system according to another embodiment of the present invention; Fig. 3 is a schematic view illustrating both the application to estimating the average speed of a vehicle between two locations of two respective zones and the application to red-light crossing detection of a localization system according to the present invention; Fig. 4 is a schematic view of a processing unit of a localization system according to the present invention; Figs. 5a and 5b are flowcharts illustrating a vehicle localization method according to the present invention. The localization system shown in Fig. 1 essentially comprises a pair of cameras 11 and 12, whose fields of view include a first zone 110 of a roadway 100 at a place, called the first place, at which it is desired to locate a vehicle 50 traveling on said roadway 100, as well as a camera 20 whose field of view includes a second zone 120 of the roadway 100 different from said first zone 110, at a place, called the second place, at which it is also desired to locate the same vehicle 50 after it has been located at the first place of zone 110.
In the present description, the expression "locate a vehicle at a place" means that the position of the vehicle at the place in question is known. The pair of cameras 11 and 12 and the camera 20 are connected to the same image processing system 30, which consists of a modeling unit 31, a position estimation unit 32, a position processing unit 33 and a report unit 34. The cameras 11, 12 and 20 are monocular cameras. The modeling unit 31 receives the image signals from the pair of cameras 11 and 12 and determines, from these signals, a 3D model of each vehicle 50 in front of them, or of a part of each vehicle 50, for example its rear part including its registration plate, or even only a remarkable point such as the barycenter of its registration plate. For this purpose, the modeling unit 31 comprises means for discriminating the vehicles in front of the cameras 11 and 12, for example by detecting their registration plates. For more details, see Louka Dlagnekov's thesis at the University of California, San Diego, entitled "Video-based Car Surveillance: License Plate, Make and Model Recognition", published in 2005. As each 3D model considered here corresponds to a vehicle 50 at the first place inside the first zone 110, the model in question is hereinafter called the "model at the first place". The cameras 11 and 12 are calibrated, which means that their intrinsic parameters are known and used by the modeling unit 31. These intrinsic parameters are for example given by the coefficients of a matrix K. Likewise, the extrinsic parameters of one of the cameras, for example the camera 12, with respect to the other camera of the pair, are known and used by the modeling unit 31.
The images formed in the so-called retinal planes of the cameras 11 and 12 by a point P_i of coordinates (x, y, z) in front of them (the points P_i are called "antecedent points") are respectively points p_i and p'_i of coordinates (u, v) and (u', v') (the points p_i and p'_i are called "image points"), which satisfy the following equations:

λ_i (u, v, 1)^T = K [I_3 | 0] (x, y, z, 1)^T    (1)

λ'_i (u', v', 1)^T = K' [R_12 | T_12] (x, y, z, 1)^T    (2)

where [R_12 | T_12] (R_12 being a rotation matrix and T_12 a translation vector) expresses the extrinsic parameters of the camera 12 with respect to the camera 11, and λ_i and λ'_i are arbitrary factors, meaning that to the same antecedent point P_i corresponds an infinity of image points p_i and p'_i. I_3 is the identity matrix of dimensions 3 × 3. The images taken by the cameras 11 and 12 being given, the modeling unit 31 pairs the image points p_i and p'_i as images of the same antecedent point P_i (see arrow A) of the same object. This matching is known to those skilled in the art and can be accomplished by the method set forth in the article by David G. Lowe entitled "Distinctive Image Features from Scale-Invariant Keypoints", published in International Journal of Computer Vision 60.2 (2004), p. 91-110. Reference may also be made to the article by H. Bay, A. Ess, T. Tuytelaars and L. Van Gool entitled "Speeded-Up Robust Features (SURF)", published in Computer Vision and Image Understanding, vol. 110, no. 3, p. 346-359, 2008. [0003] Equations (1) and (2) above show that for each pair of image points p_i, p'_i thus matched, we have a linear system of 6 equations with only 5 unknowns, which are respectively the two factors λ_i and λ'_i and the three coordinates x, y, z of the same point P_i, antecedent of these image points p_i and p'_i.
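The matching step cited above can be sketched as a nearest-neighbour descriptor comparison with Lowe's ratio test. In this sketch the descriptors are toy vectors; a real system would use SIFT or SURF descriptors as in the cited articles, and the function name and ratio value are assumptions:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Ratio-test matching between two sets of feature descriptors.

    desc1: (n1, d) array, desc2: (n2, d) array with n2 >= 2. Returns a list of
    (i, j) pairs such that descriptor i of image 1 matches descriptor j of
    image 2. Illustrative sketch of Lowe's criterion only.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if the best distance is clearly smaller
        # than the second best (Lowe's ratio criterion).
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The same routine would serve both for pairing p_i with p'_i between the two cameras of the pair and, later, for pairing points of the camera 11 with points of the monocular camera.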
It is therefore possible, from the images provided by the calibrated cameras 11 and 12, to determine the coordinates (x, y, z) of at least one point P_i of the vehicle 50 under consideration or of a part thereof, such as its rear part including its registration plate, or even of a single remarkable point P, such as the barycenter of its registration plate. [0004] The book by Richard Hartley and Andrew Zisserman entitled "Multiple View Geometry in Computer Vision", Cambridge, 2000, can be consulted for the presentation of the mathematical model above and of the methods of stereoscopic modeling using two calibrated cameras, as briefly outlined here. The modeling unit 31 is provided to establish a model at a first place M1 of the vehicle 50, or of a part thereof, when it is in said first zone 110. In Fig. 1, the model delivered by the modeling unit 31 is designated by the reference M1, which reads: model at the first place. Three embodiments of the present invention, according to the constitution of the model in question, are proposed hereinafter. [0005] In the first embodiment, the model at the first place M1 delivered by the modeling unit 31 is a discrete set of points P_i of coordinates (x, y, z) belonging to the vehicle 50 under consideration or to a part thereof. Other points which also belong to the object but which do not belong to this discrete set can be obtained by extrapolation or interpolation of certain points of this set. This is a 3D model of the object under consideration. In this first embodiment, the position estimation unit 32 comprises means for receiving the image signals of the monocular camera 20 as well as means for receiving the model at the first place M1 from the modeling unit 31. [0006] Like the modeling unit 31, the position estimation unit 32 is provided with means for discriminating the vehicles passing in front of the camera 20. In addition, it is provided with means for recognizing the vehicles passing
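The recovery of an antecedent point P_i from a matched pair (p_i, p'_i), as implied by equations (1) and (2), can be sketched as a homogeneous linear least-squares problem. The function name and the SVD-based resolution are assumptions of this sketch, consistent with the stereoscopic methods of the book cited above:

```python
import numpy as np

def triangulate(K, Kp, R12, T12, p, pp):
    """Triangulate the antecedent point P from one matched pair (p, p').

    K, Kp: intrinsic matrices of cameras 11 and 12; [R12 | T12]: extrinsic
    parameters of camera 12 with respect to camera 11; p = (u, v) and
    pp = (u', v'): the matched image points. Each camera contributes two
    independent linear constraints on the homogeneous point.
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # projection of camera 11
    P2 = Kp @ np.hstack([R12, np.reshape(T12, (3, 1))]) # projection of camera 12
    u, v = p
    up, vp = pp
    A = np.array([
        u * P1[2] - P1[0],
        v * P1[2] - P1[1],
        up * P2[2] - P2[0],
        vp * P2[2] - P2[1],
    ])
    # Homogeneous least squares: the solution is the right singular vector
    # of A associated with its smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied to every matched pair, this yields the discrete set of points P_i that constitutes the model at the first place M1.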
in front of the camera 20, so as to select the model M1 from the unit 31 for a vehicle 50 previously detected at the first place of zone 110 and which now corresponds to the vehicle recognized by the unit 32. The camera 20 is calibrated, so that its intrinsic parameters are known in the form of a matrix K". The image formed in the so-called retinal plane of the camera 20 by a point P"_j of coordinates (x", y", z") belonging to a vehicle 50 when it is in front of the camera 20 is the image point p"_j of coordinates (u", v"), which satisfies the following relation:

λ"_j (u", v", 1)^T = K" [I_3 | 0] (x", y", z", 1)^T    (3)

To each point of a subset of the image points p"_j of the image of the camera 20, the position estimation unit 32 matches an image point p_i of one of the cameras, for example the camera 11, of the pair of cameras 11 and 12 (Fig. 1, arrow B). To do this, as shown, the position estimation unit 32 receives the image signals from one of the cameras of the pair, in this case the camera 11. These image signals, as well as the points p_i, projections of the points P_i of the model M1 in these images, constitute information I1 allowing the unit 32 to establish a correspondence between at least one point of an image taken by one of the cameras 11 or 12 and an image taken by the camera 20. This matching can be performed by the same method that was used by the modeling unit 31 for the matching of the image points p_i and p'_i respectively from the cameras 11 and 12. Then, using the model at the first place M1 established by the modeling unit 31, the position estimation unit 32 matches, to each image point p_i of the camera 11, a single point P_i of the model at the first place M1 when said vehicle 50 is in front of the camera 11 (Fig. 1, arrow C), and finally, to each image point p"_j of the camera 20, a point P_j of the model at the first place M1 (Fig. 1, arrow D). The different indices i and j reflect the fact that it is not necessarily the same pairs of points that are considered for this matching.
However, a point P"_j of the vehicle 50 when it is in front of the camera 20 corresponds to the same point P_j of the same vehicle 50 when it was in front of the camera 11 (Fig. 1, arrow E). We can write:

P"_j = R_v P_j + T_v    (4)

where R_v is a rotation matrix and T_v a translation vector, which together express a combination of the actual displacement of the vehicle 50 between the two places of the zones 110 and 120 and the change of point of view in switching from the camera 11 to the camera 20, a combination which is here called a "pose variation" of the vehicle 50. Expression (4) above reflects the relationship that exists between the coordinates of the points P_j of the vehicle 50 at the first place (that is to say, in front of the pair of cameras 11 and 12) and the coordinates of the points P"_j of the same vehicle 50 at the second place (that is to say, in front of the camera 20) (Fig. 1, arrow E). The points P_j of the vehicle at the first place are present in the model at the first place M1. Once the coefficients of the matrix R_v and the components of the vector T_v have been determined, it is possible, knowing the model at the first place M1, to determine the coordinates of the points P"_j of the vehicle 50 at the second place and thus, taking up the terminology of the present description, to locate the vehicle 50 at the second place. [0007] According to this first embodiment of the invention, the position estimation unit 32 is designed to estimate the coefficients of the rotation matrix R_v and the components of the translation vector T_v. Given the expression (4) above, the expression (3) becomes:
= K "[R (5) 1 Now, this expression (5) reflects the fact that at each image point p" j of the image of the camera 20, there corresponds an infinity of points Pj of the model in the first place M1 possible according to the value taken by the parameter X "j for the point considered and according to the value of the parameters which define the rotation of matrix Rv and the translation of vector T. For each pair of points (p" j, Pi), the expression (5) above represents three equations (considered, for the purposes of the demonstration, independent of each other) comprising, on the one hand, independently of the pair of points (p ", 13) considered, six unknowns which are respectively three coefficients of the rotation matrix Rv and three components of the translation vector Tv and, on the other hand, for each given pair (p "j, Pj) of points, an unknown for the corresponding parameter X" j. we consider three pairs of points, so we have a linear system of 9 equations at 6 + 3 = 9 unknowns Thus, the resolution of expression (5) above is possible if we have at least three pairs of points p "j, P. Different methods can be used to solve the expression ( 5) above. [0008] For example, it can be solved by direct calculation provided that it has at least four pairs of points (pi, Pi). Indeed, equation (5) can be rewritten as vector products: ((p "] x [RT 1 _ = 0 (6) 1)" This is a linear system with 12 unknowns and 3 equations per pair of points For the resolution of this expression (6) for a relatively large number of pairs of points (p "j, Pj) (generally greater than three) thus giving a better precision of the estimation of the coefficients of rotation and translation components, according to the invention, it is proposed to implement a method for non-linear optimization of the reprojection error.Expression (5) above can also be written in the following way: X ", [l = 1 0 0 P. 0 1 0 1_ and (7) P. 
0 1] K" [RT, IL 1 'Thus, for a given point Pj and given values of the coefficients of the matrix [R, Ta], equation (7) gives the coordinates of the theoretical image point p "cj (c for computed): ## EQU1 ## (8) pHiC [0 0 11K "[Rv TJ The reprojecti error we are the norm of the vector p "c, p", between the theoretical image point p "c, calculated and the image point p", of the image actually given by the camera 20 of this point P, (as explained above. above), ie: ei (9) The proposed method determines the coefficients of the rotation matrix R, and the components of the translation vector I ', by minimizing this reprojection error e. According to an exemplary embodiment, this method consists in determining the components of the translation vector I ', and the coefficients of the rotation matrix R, which minimize the sum of the squared error of projection for a set of points P, of the same object, vehicle or part of the vehicle: [R, T'y] = argmin / Cl (10) As already mentioned above, once the parameters of the matrix R, and of the vector Tv have been determined, it is possible, knowing the model M, in the first place and using the equation (4) above, to determine the coordinates of at least one point P ", of the vehicle 50 at the second place or of a part thereof. each vehicle 50 or a part thereof is not only detected in front of the cameras 11 and 12 but is also located in front of said two cameras 11 and 12 at a first location inside the first zone 110 of the roadway 100, because it is possible to determine the coordinates of the points or certain points P, which constitute his model in the first place M1. Similarly, it is located in front of the camera 20 at a second precise location of the second zone 120 of the road 100, because the coordinates of the points or certain points P ", second of said second zone 120 can be determined by the resolution of expression (4) above. 
[0009] In the first embodiment described above, no assumption was made as to the shape of the object, or of the part of the object, in front of the cameras 11, 12 or 20. In the second embodiment, which is now described, it is considered that the modeling unit 31 only establishes a model M1 of a flat part, for example the registration plate, of each vehicle 50 in front of the cameras 11 and 12. As will be seen subsequently, this model M1 comprises, in addition to the coordinates (x, y, z) of at least one point P_i, information which defines the plane in which lies a discrete set of points P_i of coordinates (x, y, z) of the registration plate of the vehicle in question. The transformation which maps, for a given point P_i (x, y, z), a point p_i of the image formed in the retinal plane of the camera 11 to a point p'_i of the image formed in the retinal plane of the camera 12 is a homography. Thus, if equations (1) and (2) above are reduced to one, the new equation can be written in homogeneous coordinates:

p̃'_i = H p̃_i    (10)

where H is the homography matrix which maps the points p_i to the points p'_i. For a plane whose equation is
(x y z) n + d = 0, i.e. n^T P + d = 0, and if K and K' are the intrinsic parameters of the cameras 11 and 12, and R a rotation matrix and T a translation vector defining the extrinsic parameters of these cameras 11 and 12, it can be shown that the homography matrix H satisfies the following equation:

H = K' ( R − T n^T / d ) K^{-1}    (11)

Having a plurality of matched pairs of points (p_i, p'_i), respectively images of the same points P_i all belonging to the same plane, in this case the registration plate of the vehicle in question, the modeling unit 31 can determine or estimate the coefficients of the matrix H and then, knowing the intrinsic parameters K and K' and the extrinsic parameters of the cameras 11 and 12 defined by the rotation matrix R and the translation vector T, determine the parameters n and d defining the plane, and thus the relative position of the part of the object which merges with the plane. Reference may be made to the book entitled "Multiple View Geometry in Computer Vision" by R. Hartley and A. Zisserman, published by Cambridge University Press, in particular Chapter 4 and Algorithm 4.1. [0010] The parameters n and d define the plane to which the flat part of the object in question belongs, in this case the registration plate of the vehicle 50. They constitute, with the coordinates of at least one point P_i of the vehicle 50 at the first place, a model of this vehicle (or part of vehicle) in front of the cameras 11 and 12. This model M1 is transmitted by the modeling unit 31 to the position estimation unit 32. As before, the latter receives the image signals of the monocular camera 20 as well as the model M1. The camera 20 is calibrated, so that its intrinsic parameters are known in the form of a matrix K". As previously also, the unit 32 has information I1 allowing it to establish a correspondence between at least one point P_i of an image taken by one of the cameras 11 or 12 and an image taken by the camera 20.
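The estimation of the homography coefficients from matched pairs (p_i, p'_i), as performed by the modeling unit, can be sketched with the classical DLT algorithm (in the spirit of Algorithm 4.1 of the cited book). Point normalization and robust estimation, which a production system would add, are omitted from this sketch:

```python
import numpy as np

def estimate_homography(pts1, pts2):
    """DLT estimate of the homography H such that p~'_i = H p~_i.

    pts1, pts2: (n, 2) arrays of matched image points, n >= 4 with no three
    points collinear. Each pair contributes two rows of the homogeneous
    system A h = 0, whose solution is the nullspace of A.
    """
    A = []
    for (u, v), (up, vp) in zip(pts1, pts2):
        A.append([-u, -v, -1, 0, 0, 0, up * u, up * v, up])
        A.append([0, 0, 0, -u, -v, -1, vp * u, vp * v, vp])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale
```

The same routine estimates H' of equation (12) below, from pairs (p_i, p"_i) between the camera 11 and the monocular camera 20.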
This information I1 consists, in this case, of the image signals received from one of the two cameras 11 or 12 and of the points p_i, projections of the points P_i of the model M1 in these images. The position estimation unit 32 therefore first matches (see Fig. 1, arrow B) to each image point p"_i of the camera 20 an image point p_i of one of the cameras, for example the camera 11, of the pair of cameras 11 and 12. The transformation which maps a point p_i of coordinates (u, v) in the image formed in the retinal plane of the camera 11 to a point p"_i of coordinates (u", v") in the image formed in the retinal plane of the camera 20 is a homography, so we can write:

p̃"_i = H' p̃_i    (12)

As before, the homography matrix H' can be written:

H' = K" ( R_p − T_p n^T / d ) K^{-1}    (13)

where R_p is a rotation matrix and T_p a translation vector defining the pose change of the vehicle under consideration. Considering a plurality of pairs of points (p_i, p"_i) thus matched, the position estimation unit 32 calculates the coefficients of the homography matrix H' mapping one point of each pair to the other. Moreover, by using the parameters n and d which have been transmitted to it by the modeling unit 31, it determines the rotation matrix R_p and the translation vector T_p. As for the first embodiment, once the coefficients of the rotation matrix R_p and the components of the translation vector T_p have been determined, it is possible to determine, by the resolution of the expression (4) above, the coordinates of the points, or of certain points, P"_i which belong to the plane, in this case of the registration plate, of the vehicle 50 when it is in said second zone 120. Thus, the same vehicle 50 can be located not only at the first place inside the first zone 110 of the roadway 100 but also at the second place of the second zone 120 of the roadway 100. In the third embodiment of the present invention, the modeling unit 31 only deals with a single remarkable point P (x, y, z) of the vehicle located in front of the cameras 11 and 12.
This remarkable point is for example the barycenter of the registration plate of the vehicle under consideration. In this embodiment, the cameras 11, 12 and 20 are calibrated with respect to the road, considered as a plane defined by parameters n and d, such that the equation of the plane can be written as:

(x y z) n + d = 0

The height h of a point P of coordinates (x, y, z) with respect to the road is then given by the relation:

h = n^T P + d    (14)

The modeling unit 31 receives the image signals from the pair of cameras 11 and 12, determines the coordinates (u, v) of the image point p of a remarkable point P of the vehicle 50 under consideration (in this case, the barycenter of the registration plate) through the camera 11, as well as the coordinates (u', v') of the image point p' of the same point P through the camera 12. It determines, as previously described, the coordinates (x, y, z) of the point P and, by means of the relation (14) above, the height h of the point P with respect to the road. Finally, it transmits to the estimation unit 32 the model M1, which consists of the coordinates (x, y, z) of the point P and of the height h. In this embodiment, the information available to the position estimation unit 32 for establishing a correspondence between the point p and the point p" is implicit, in the form of the characteristic or characteristics of the remarkable point in question. For example, this information is the fact that this remarkable point is the barycenter of the registration plate of the vehicle under consideration. For the position estimation unit 32, a point p", image through the camera 20 of a point P" of coordinates (x", y", z"), satisfies the relation (3). Furthermore, the camera 20 being calibrated with respect to the road, like the camera 11, it is possible to write:
P "+ d" (16) where the parameters n ", d" define a plane belonging to the road with respect to this camera 20, equation plane: (xyz) rn "+ d" = 0 The unit of position estimation 32 considers that the height of the remarkable point is constant and therefore that h = h ", we can notice that we have a linear system of four equations (three for the relation (3), then one for the relation ( 16), for four unknowns: the coordinates (x ", y", z ") of the point P" and X ". It is therefore possible to determine the coordinates of the point P" Thus, the same vehicle 50 can be located, by determining the coordinates of the remarkable point P, firstly inside the first zone 110 of the roadway 100 and, by determining the coordinates of the point P ", at the second location of the second zone 120 of the roadway 100. [0011] A vehicle tracking system according to the invention can provide a fourth camera whose field of view includes a third zone of the roadway 100 and which would locate the vehicle 50 in front of this camera after it has been located in the first place. the first zone in front of the cameras 11 and 12. The locating process in front of this fourth camera would be the same as that described for the third camera 20. Thus, not only would it be possible to obtain a model in a first place at the interior of the first zone 110 and a model at a second location within the second zone 120, but also a model at a third location within the third zone. It is still possible to envision as many monocular cameras as is desirable. It is shown in FIG. 2, another exemplary embodiment of a vehicle location system 50 according to the present invention, here in several areas of a roadway 100 (here 4 zones 110, 120, 121 and 122 and no longer two zones as to Fig. 1) which are different from each other. The vehicle 50 passes in these areas consecutively in this order. 
The order of passage of the vehicle may nevertheless be different; in this case, the locations and the associated processing in the zones 120, 121, 122 cannot be computed until the vehicle has passed through the zone 110. This locating system comprises a pair of cameras 11 and 12, which are calibrated and whose respective fields of view include the first zone 110 of said zones, and, for each of the other zones 120, 121 and 122, a calibrated monocular camera 20 to 22 whose field of view includes said other considered zone (respectively 120 to 122).

This locating system comprises a modeling unit 31 which is designed to receive the image signals from each of the cameras 11 and 12 of the pair of cameras and to establish a model M1 of the vehicle (or of a part of said vehicle) at a first location of said first zone 110. The vehicle 50 is thus located at said first location, as explained above.

For the monocular camera 20 of the zone 120, the locating system comprises a position estimation unit 320 which is designed to receive the image signals of said monocular camera 20 as well as the model at the first location M1 of the zone 110 preceding said considered zone 120, delivered by the modeling unit 31, and to deduce therefrom the coordinates of the points of the vehicle 50 at a second location, and thus to locate the vehicle 50 at said second location. From these coordinates, the position estimation unit 320 acts in the manner of the modeling unit 31 and establishes a model M2 of the vehicle 50 (or of a part of said vehicle) at a location of said considered zone 120. This new model M2 is called "model at the second place M2".
For the monocular camera 21 of the zone 121, the locating system comprises a position estimation unit 321 which is designed to receive the image signals of said monocular camera 21 as well as the model at the first place M1 of the zone 110 preceding said considered zone 121, delivered by the modeling unit 31, and to determine the coordinates of the points of the vehicle 50 (or of a part of said vehicle) at a location of said zone 121, and thus to locate said vehicle at said location. The position estimation unit 321 acts in the manner of the position estimation unit 32 of FIG. 1.

For the monocular camera 22 of the zone 122, the locating system comprises a position estimation unit 322 which is provided to receive the image signals of said monocular camera 22 as well as the model at a previous location, which is here not the model at the first place but the model at the second place M2. It determines the coordinates of the points of the vehicle 50 (or of a part thereof) at a location of said zone 122 and thus locates said vehicle at said location. To do this, it acts in the manner of the position estimation unit 32 of FIG. 1.

Generally, for each monocular camera 2n of a zone 12n (where n is any integer), a position estimation unit 32n is provided to receive the image signals of said monocular camera 2n and the model at a location of a zone 12m (m < n) preceding said considered zone 12n, called the preceding place, delivered either by the modeling unit 31 (m = 0) or by a position estimation unit 32m of a monocular camera 2m of a zone 12m preceding said zone 12n, and to determine the coordinates of the vehicle or of a part of said vehicle at a location of said zone 12n and thus to locate said vehicle at said location.

A first particular application of the locating system which has just been described is now envisaged. This application concerns the evaluation of the average speed of vehicles between two different places. It is shown in FIG.
3, in the form of tetrahedrons, the frames R11 and R20 which are respectively linked to the cameras 11 and 20. The point P is one of the points of the model Ma at a first location of the vehicle 50 considered, at the instant ta where it was located in front of the cameras 11 and 12 (in this case, a = 1) or in front of a monocular camera (see FIG. 2), while the point P" is one of the points of the model Mb at a second location of the same vehicle 50, at the instant tb where it has been located in front of the camera 20 (b > a). If V is a unit vector tangent to the estimated trajectory of the vehicle 50 while it is in front of the camera 11, the curvilinear abscissa of the point P in the frame R11 is estimated by the scalar product of the vector V by the vector OP (O being the origin of the frame R11): s = V · OP. Similarly, if V" is a unit vector tangent to the estimated trajectory of the vehicle 50 while it is in front of the camera 20, the curvilinear abscissa of the point P" in the frame R20 is estimated by the scalar product of the vector V" by the vector O"P" (O" being the origin of the frame R20): s" = V" · O"P".

If s13 is the curvilinear distance between the cameras 11 and 20, the distance d(P, P") traveled by the vehicle 50 between the instant ta where it is located in front of the cameras 11 and 12 and the instant tb where it is located in front of the camera 20 is given by the following expression: d(P, P") = s13 + s" - s. The average speed of the vehicle 50 between the localization instants ta and tb then corresponds to the distance traveled per unit time and is therefore given by the following relation: Vaverage = d(P, P") / (tb - ta). The vectors OP and O"P" are respectively determined by the 3D modeling unit 31 and the position estimation unit 32 (or by two different position estimation units, for example, in the case of FIG. 2, the position estimation units 320 and 321, or 321 and 322).
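The average-speed computation just described (curvilinear abscissas s and s", surveyed inter-camera distance s13) can be sketched as follows; this is an illustrative sketch only, with all vectors and numerical values hypothetical.

```python
import numpy as np

def average_speed(V, OP, V2, O2P2, s13, ta, tb):
    """Average speed of the vehicle between the two localization instants.

    V, V2    : unit vectors tangent to the estimated trajectory at the
               first and second places (expressed in frames R11 and R20)
    OP, O2P2 : vectors from each camera-frame origin to the located point
    s13      : surveyed curvilinear distance between the two cameras
    ta, tb   : localization instants

    Implements s = V.OP, s" = V".O"P", d = s13 + s" - s, v = d / (tb - ta).
    """
    s = float(np.dot(V, OP))        # curvilinear abscissa in R11
    s2 = float(np.dot(V2, O2P2))    # curvilinear abscissa in R20
    d = s13 + s2 - s                # distance traveled between ta and tb
    return d / (tb - ta)
```

For example, with abscissas s = 5 m and s" = 2 m, an inter-camera distance of 503 m and an 18 s interval, the average speed comes out to 500/18 ≈ 27.8 m/s.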
As for the vectors V and V" tangent to the trajectory of the vehicle 50, various methods for determining them are proposed here. According to a first exemplary method, these vectors have the local direction of the road: the vehicle 50 has, locally, a trajectory which follows the profile of the road on which it travels. According to a second exemplary method, the modeling unit 31 and the position estimation unit 32, 320 are provided with tracking systems for obtaining the models Ma and Mb of a vehicle 50 in front of the cameras 11 and 12 and the camera 20 in at least two successive positions, P0, P1 and P"0, P"1, respectively at the first and second places. The vectors V and V" are then respectively collinear with the vectors P0P1 and P"0P"1.

In this first application, the unit 33 computes, as just indicated, the average speed of the vehicle 50 between the points P and P" and delivers this information to the reporting unit 34. The latter provides a report accompanied, for example, by at least one photograph taken by one of the cameras 11, 12 or 20, or by another camera provided for this purpose (not shown), according to the envisaged use of this report.

It will be noted that the tracking system of the modeling unit 31 also makes it possible to measure the instantaneous speed of the vehicle at the first place, in front of the cameras 11 and 12. At the same time as it determines the two successive positions P0 and P1, the modeling unit 31 can record the corresponding times t0 and t1. The instantaneous speed of the vehicle 50 at the first place is given by the following relation: Vinstant = d(P0, P1) / (t1 - t0). Similarly, the tracking system of the or of a position estimation unit 32 also makes it possible to measure the instantaneous speed of the vehicle at another location.
Indeed, at the same time as it determines the two successive positions P"0 and P"1, this unit 32 can record the corresponding times t"0 and t"1. The instantaneous speed of the vehicle 50 at this location is given by the following relation: Vinstant = d(P"0, P"1) / (t"1 - t"0).

Another specific application of the locating system just described is now contemplated. This application concerns line crossing detection. It is also described in connection with FIG. 3. In FIG. 3, the crossing lines L1 and L2 are generators of two vertical planes. For this application, the modeling unit 31 (or a position estimation unit 32) is provided with a tracking system making it possible to obtain several successive positions P0, P1 and P of an object (vehicle or part of a vehicle) at a first place, and thus to determine the particular position P corresponding to that of the crossing of the line L1 by said object. For example, this crossing will be considered as taking place when at least one particular point of said object belongs to the vertical plane. Similarly, the or a position estimation unit 32, provided with a tracking system, makes it possible to obtain several successive positions P"0, P"1 and P" of the object (vehicle 50 or part of a vehicle) in front of the camera 20 considered, and thus to determine a particular position P" corresponding to that of the crossing of the crossing line L2 by said object. This crossing by an object, in this case a vehicle or a part thereof, will be considered to take place when at least one particular point of said object (which may be identical to or different from the point used for crossing the first line L1) belongs to the vertical plane. The unit 33 can determine the times t0 and t1 of crossing of the lines L1 and L2.
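The detection of the crossing of a line, modeled as the generator of a vertical plane, can be sketched as follows. This is a minimal illustration of the plane-membership test described above, assuming the vertical plane is given in the form nᵀ·P + d = 0; the linear interpolation of the crossing instant between two tracked samples is an added convenience, not taken from the text.

```python
import numpy as np

def crossing_instant(positions, times, plane_n, plane_d):
    """Detect the instant at which a tracked point crosses a vertical plane.

    positions        : successive 3D positions P0, P1, ... of the tracked point
    times            : corresponding timestamps t0, t1, ...
    plane_n, plane_d : parameters of the vertical plane generated by the
                       crossing line, of equation n^T . P + d = 0

    A crossing occurs between two successive positions whose signed
    distances to the plane have opposite signs; the crossing instant is
    obtained by linear interpolation. Returns None if no crossing occurs.
    """
    signed = [float(np.dot(plane_n, p) + plane_d) for p in positions]
    for i in range(len(signed) - 1):
        s0, s1 = signed[i], signed[i + 1]
        if s0 == 0.0:
            return times[i]          # the point lies exactly on the plane
        if s0 * s1 < 0.0:
            f = s0 / (s0 - s1)       # interpolation fraction between samples
            return times[i] + f * (times[i + 1] - times[i])
    return None
```

With the plane z = 10 and a point tracked at z = 8 then z = 12 between t = 0 s and t = 1 s, the crossing is detected at t = 0.5 s.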
The curvilinear distance between the lines L1 and L2 being known, the unit 33 can thus determine the average speed of the vehicle 50 between the lines L1 and L2. The unit 33 can deliver this information to the reporting unit 34, which in turn delivers a report accompanied, for example, by at least one photograph taken by one of the cameras 11, 12 or 20, or by a camera dedicated to this effect, depending on the intended use of this report.

Another specific application of the locating system described above is now contemplated, where the lines L1 and L2 are red light crossing lines. According to the French legislation, given here by way of example, the running of a red light is defined as the crossing, first, of a first crossing line L1, called the fire effect line, and then the crossing of a second crossing line L2, called the fire line, parallel to the line L1 and distant from it, for example on the other side of the transverse way that the red light prevents from crossing. Two photographs are taken, one at the crossing of the effect line L1, the other at the crossing of the fire line L2.

[0012] As previously, for this application, the modeling unit 31 or a position estimation unit 32 is provided with a tracking system making it possible to obtain several successive positions P0, P1 and P of said object (vehicle or part of a vehicle) (see, for example, Louka Dlagnekov's thesis already mentioned above, and in particular chapter 3.1 devoted to the tracking of license plates) and thus to determine the particular position P corresponding to that of the crossing of the crossing line L1 by said object. For example, this crossing will be considered as taking place when at least one point of said object belongs to the vertical plane. When this crossing takes place, a first photograph of the vehicle 50 is taken crossing the effect line L1.
As previously also, the or a position estimation unit 32 provided with a tracking system makes it possible to obtain several successive positions P"0, P"1 and P" of the object (vehicle 50 or part of a vehicle) in front of the camera 20, and thus to determine a particular position P" corresponding to that of the crossing of the crossing line L2 by said object. This crossing by an object, in this case a vehicle or a part of this vehicle, will be considered as taking place when all the points of said vehicle have passed beyond the vertical plane. When this crossing takes place, a second photograph of the vehicle 50 is taken crossing the fire line L2.

In addition, the estimation unit 32 is provided with means for associating the vehicles passing in front of the camera 20, in order to select the model M1 coming from the unit 31 for a vehicle 50 previously tracked at the first place 110 and which now corresponds to the vehicle recognized by the unit 32. For this application, the field of the camera 20 will advantageously comprise an overlap area with the field of the cameras 11 and 12; in this case, this association can be achieved with the spatial criteria resulting from the tracking. The reporting unit 34 is provided to issue a report which includes the first photograph of the vehicle 50, including its license plate, at the time of its crossing of the fire effect line L1, and the second photograph of the vehicle 50 at the time of its crossing of the fire line L2.

It is shown in FIG. 4 a computer machine 300 which could constitute the processing unit 30.
This machine 300 consists of a central processing unit 301 (CPU) which is connected to a memory 302, to ports 311 and 312 for receiving the image signals that come respectively from the cameras 11 and 12, to at least one port 320 for receiving the image signals from at least one camera 20, and to a port 334 which is provided to output a report to a suitable human-machine interface (not shown).

[0013] The memory 302 contains a program which comprises instructions or parts of code which, when executed by the central processing unit 301, implement a method of locating the same vehicle in several zones different from one another. This method is described in relation to FIGS. 5a and 5b. Step E1 is a modeling step at the first place, to establish, from the image signals received from each of the cameras 11 and 12 of the pair of cameras, a model M1 of the vehicle 50, or of a part of said vehicle 50, at a first location of said first zone, and thereby to locate said vehicle at said first location. This step E1 corresponds to what has been described previously in relation with the modeling unit 31.

[0014] Step E2 is a position estimation step which determines, from the image signals received from said monocular camera 20, at least one image received from one of the cameras 11 or 12 at the first place, as well as the model at the first place M1 established at step E1 (or a model Mm at a location of a zone preceding said considered zone, established at a preceding step E2), the coordinates of at least one point of the vehicle at a location of said zone, and thus locates said vehicle or part of a vehicle at said location. Optionally, it also determines a model Mn of said vehicle 50 or part of a vehicle at a location of said zone. This step E2 corresponds to what has been previously described in relation to the position estimation unit 32.

[0015] It will be understood that several steps E2 can be implemented, one at each monocular camera. In FIG.
5b, the step E2 is detailed and comprises a step E20 of establishing the correspondence between image points of said vehicle 50, or part of the vehicle, present in the image delivered by said monocular camera 20 and points of said model at a previous location (for example, the first place), and a step E21 of estimating the pose variation of said vehicle 50 in order to deduce therefrom the position of the points of said 3D model at the location of said considered zone.
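Step E21, the estimation of the pose variation by non-linear optimization of the reprojection error (see claim 5), can be sketched as follows. This is a deliberately simplified stand-in: the pose variation is reduced to a single displacement t along a known direction u, rather than a full six-degree-of-freedom pose, and Gauss-Newton with a numerical Jacobian is only one possible choice among non-linear least-squares methods; all names are illustrative.

```python
import numpy as np

def project(C, P):
    """Pinhole projection of a 3D point with a 3x4 projection matrix C."""
    q = C @ np.append(P, 1.0)
    return q[:2] / q[2]

def estimate_displacement(C, model_pts, observed, u, t0=0.0, iters=20):
    """Estimate the vehicle's pose variation, reduced here to a scalar
    displacement t along direction u, by minimizing the reprojection
    error between the displaced model points and the observed image
    points (Gauss-Newton with a numerical Jacobian)."""
    t = t0
    eps = 1e-6
    for _ in range(iters):
        # residuals: observed image points minus reprojected model points
        r = np.concatenate([observed[i] - project(C, P + t * u)
                            for i, P in enumerate(model_pts)])
        r_eps = np.concatenate([observed[i] - project(C, P + (t + eps) * u)
                                for i, P in enumerate(model_pts)])
        J = (r_eps - r) / eps                  # d(residual)/dt, numerically
        step = -np.dot(J, r) / np.dot(J, J)    # Gauss-Newton normal-equation step
        t += step
        if abs(step) < 1e-12:
            break
    return t
```

Given two model points on the vehicle and their images after a 5 m advance along the road, the iteration recovers the displacement t ≈ 5 from a cold start t0 = 0.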
Claims (21)

[0001] 1) System for locating the same vehicle (50) in at least two zones (110, 120, 121, 122) through which it passes consecutively, characterized in that it comprises: - a pair of calibrated cameras (11 and 12) whose respective fields of view include the first zone (110) of said zones, - at least one calibrated monocular camera (20, 21, 22) whose field of view includes a zone other than said first zone, - a modeling unit (31) which is provided to receive the image signals from each of the cameras (11 and 12) of the pair of cameras, to establish a model at a first location M1 of the vehicle (50) or of a portion of said vehicle (50), and to determine the coordinates of at least one point of said vehicle (50) at said first location of said first zone (110) and thereby to locate it, and - a position estimation unit (32) which is associated with a monocular camera (20, 2n) (n an integer) of a zone (120, 12n) other than the first zone (110) and which is provided for determining the coordinates of at least one point of said vehicle (50) at a second location of said considered zone (120, 12n) as a function of: - the image signals of said monocular camera (2n), - the model at the first place M1 of said vehicle (50) that said modeling unit (31) has established, - information I1 allowing said position estimation unit (32) to match at least one point of an image taken by one of the cameras (11 and 12) and of an image taken by the monocular camera (2n), and thereby to locate said vehicle (50).

[0002] 2) Locating system according to claim 1, characterized in that the modeling unit (31) determines the coordinates (x, y, z) of a point (Pi) or of a discrete set of points (Pi) which belong to the vehicle (50) considered or to a part thereof, when it is at the first place, said model at the first place M1 comprising said coordinates.
[0003] 3) Locating system according to claim 2, characterized in that said point or points (Pi) of said discrete set of points belong to the same plane of a part of the vehicle (50) considered, said model at the first place also including information defining said plane.

[0004] 4) Locating system according to one of claims 2 or 3, characterized in that said position estimation unit (32) estimates the pose variation of said vehicle (50) considered between said first place of the first zone (110) and the second place of the second zone (120, 12n) and deduces therefrom, for at least one point (Pi) of the model at the first place M1, the coordinates of the corresponding point (P"i) at the second place.

[0005] 5) Locating system according to claim 4, characterized in that, to estimate the pose variation of said vehicle, said or each position estimation unit implements a non-linear optimization method of the reprojection error.

[0006] 6) Locating system according to claim 1, characterized in that the modeling unit (31) determines the coordinates (x, y, z) of at least one point (Pi) which belongs to the vehicle (50) considered or to a part of it, when it is at the first place, said model at the first place M1 comprising, on the one hand, said coordinates and, on the other hand, information defining the plane of the road on which said considered vehicle (50) travels, or the height of said point or points (Pi) above said road.

[0007] 7) Locating system according to one of the preceding claims, characterized in that it further comprises at least one position estimation unit (32m) which is associated with a monocular camera (2m) (m an integer) of a zone (12m) other than the first zone (110) and which is intended to act as a modeling unit, and thus to establish a model Mm, at a location of said zone, of the vehicle (50) or of a part of said vehicle (50), and to transmit said model Mm to a next position estimation unit (32n).
[0008] 8) System for estimating the average speed of a vehicle between two zones distant from each other, said system comprising: - a locating system for locating the same vehicle at a first place of a first zone of said zones and at a second place of a second zone of said zones, - means for determining the duration of the journey of the vehicle between the first place and the second place, - means for estimating the curvilinear distance between the first place and said second place, - means for determining the average speed of said vehicle from said duration and said curvilinear distance, characterized in that said locating system is a locating system according to one of claims 1 to 7.

[0009] 9) System for detecting the crossing by a vehicle of at least one line in a zone, characterized in that said detection means consist of a locating system according to one of claims 1 to 7, in which said modeling unit or said or each position estimation unit is further provided with a tracking system for detecting the crossing of the line of the considered zone.

[0010] 10) System for measuring the instantaneous speed of a vehicle at a place of a zone, consisting in locating said vehicle at said place and measuring the instantaneous speed of said vehicle at the place where it has been located, characterized in that it comprises a locating system according to one of claims 1 to 7 for locating said vehicle at said place.
[0011] 11) Method of locating the same vehicle (50) in at least two zones (110, 120) through which it passes consecutively, said method being implemented by a locating system which comprises: - a pair of cameras (11 and 12) whose respective fields of view include the first zone (110) of said zones, and - a calibrated monocular camera (20, 21, 22) whose field of view includes a zone other than said first zone, characterized in that it comprises the following steps: - a modeling step for establishing, from the image signals received from each of the cameras (11 and 12) of the pair of cameras, a model at a first location M1 of the vehicle (50) or of a portion of said vehicle (50), and for determining the coordinates of at least one point of said vehicle (50) at said first location of said first zone (110) and thereby locating it, and - for each monocular camera (20, 2n) (n an integer) of a zone (120, 12n) other than the first zone (110), a position estimation step for, from the image signals received from said monocular camera (2n), from the model at the first place M1 of the vehicle (50) and from information I1 allowing a correspondence to be established between at least one point of an image taken by one of the cameras (11 and 12) and of an image taken by the monocular camera (2n), determining the coordinates of at least one point of said vehicle (50) at a second place of said considered zone and thus locating it.

[0012] 12) Locating method according to claim 11, characterized in that said modeling step consists in determining the coordinates (x, y, z) of a point (Pi) or of points (Pi) of a discrete set of points which belong to the vehicle (50) considered or to a part thereof, when it is at the first place, and in determining said model at the first place M1 as comprising said coordinates.
[0013] 13) Locating method according to claim 12, characterized in that said point or points (Pi) of said discrete set of points belong to the same plane of a part of the real object, said model at the first place further including information defining said plane.

[0014] 14) Locating method according to one of claims 12 or 13, characterized in that said position estimation step consists in estimating the pose variation of said vehicle (50) between the first place of the first zone (110) and the second place of the second zone (120), and in deducing therefrom, for at least one point (Pi) of the model at the first place M1, the coordinates of the corresponding point (P"i) at the second place.

[0015] 15) Locating method according to claim 14, characterized in that, for estimating the pose variation of said vehicle, the position estimation step implements a method of non-linear optimization of the reprojection error.

[0016] 16) Locating method according to claim 11, characterized in that the modeling step consists in determining the coordinates (x, y, z) of at least one point Pi which belongs to the vehicle (50) considered or to a part of the latter, when it is at the first place, and in determining said model at the first place M1 as comprising, on the one hand, said coordinates and, on the other hand, information defining the plane of the road or the height of said point Pi above the road.
[0017] 17) Method for estimating the average speed of a vehicle between two different and distant zones, said method comprising: - a locating step for locating the same vehicle at a first place of a first zone of said zones and at a second place of a second zone of said zones, - a step for determining the duration of the journey of the vehicle between the first place and the second place, - a step for estimating the curvilinear distance between the first place and said second place, - a step for estimating said average speed from said duration and said curvilinear distance, characterized in that said locating step is implemented according to a locating method according to one of claims 11 to 16.

[0018] 18) Method for detecting the crossing of at least one line of a zone by a vehicle, comprising at least one step of detecting the crossing of said line or lines, characterized in that the detection step or steps are implemented according to a locating method according to one of claims 11 to 16, said modeling step or said or each position estimation step comprising a tracking step for detecting the crossing of the line of the considered zone.

[0019] 19) Detection method according to claim 18, characterized in that it is provided to detect the crossing by said vehicle of a first line in a first zone and of a second line in a second zone.

[0020] 20) Method for measuring the instantaneous speed of a vehicle at a place of a zone, said method comprising a step of locating said vehicle at said place and a step of measuring the instantaneous speed of said vehicle at the place where it has been located, characterized in that said locating step is carried out according to a locating method according to one of claims 11 to 16.
[0021] 21) Program written on a medium and intended to be loaded into a programmable device of a locating system according to one of claims 1 to 7, or of an average speed estimation system according to claim 8, or of a line-crossing detection system according to claim 9, or of a system for measuring the instantaneous speed of a vehicle according to claim 10, said program comprising instructions or parts of code for implementing the steps of a method of locating, of average speed estimation, of line-crossing detection or of instantaneous speed measurement according to one of claims 11 to 20, when said program is executed by said programmable device.
Family patents:
Publication number | Publication date
EP2937812B1 | 2018-06-06
EP2937812A1 | 2015-10-28
FR3020490B1 | 2019-04-12